Programming Data and Task Parallelism with Chapel
Author
Abstract
Chapel is a new global-view parallel programming language developed by Cray Inc. that represents a new direction in programming parallel machines. In this paper, we present two data-parallel and two task-parallel algorithms written in Chapel to show the effectiveness of the language in specifying parallel algorithms and computations.
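The abstract contrasts Chapel's data-parallel and task-parallel styles. As a brief illustrative sketch (generic Chapel, not code from the paper itself), the `forall`, `cobegin`, and `coforall` constructs express the two styles:

```chapel
// Data parallelism: forall spreads the iterations of a range
// across the tasks the runtime makes available.
var A: [1..8] int;
forall i in 1..8 do
  A[i] = i * i;

// Task parallelism: cobegin launches each enclosed statement
// as a separate task and waits for all of them to finish.
cobegin {
  writeln("task one");
  writeln("task two");
}

// coforall creates one task per loop iteration, useful when
// each iteration is a distinct unit of concurrent work.
coforall tid in 1..4 do
  writeln("hello from task ", tid);
```

Here `forall` is a data-parallel loop whose iterations may be grouped onto fewer tasks, while `coforall` guarantees one task per iteration, which is the usual choice for explicitly task-parallel algorithms like those the paper presents.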
Similar resources
Task Parallelism and Synchronization: An Overview of Explicit Parallel Programming Languages
Programming parallel machines as effectively as sequential ones would ideally require a language that provides high-level programming constructs in order to avoid the programming errors frequent when expressing parallelism. Since task parallelism is often considered more error-prone than data parallelism, we survey six popular and efficient parallel programming languages that tackle this diffic...
Task Parallelism and Data Distribution: An Overview of Explicit Parallel Programming Languages
Programming parallel machines as effectively as sequential ones would ideally require a language that provides high-level programming constructs to avoid the programming errors frequent when expressing parallelism. Since task parallelism is considered more error-prone than data parallelism, we survey six popular and efficient parallel language designs that tackle this difficult issue: Cilk, Cha...
LLVM Optimizations for PGAS Programs. Case Study: LLVM Wide Pointer Optimizations in Chapel
PGAS programming languages such as Chapel, Coarray Fortran, Habanero-C, UPC and X10 [3–6, 8] support high-level and highly productive programming models for large-scale parallelism. Unlike message-passing models such as MPI, which introduce nontrivial complexity due to message-passing semantics, PGAS languages simplify distributed parallel programming by introducing higher-level parallel languag...
Work-First and Help-First Scheduling Policies for Terminally Strict Parallel Programs
Multiple programming models are emerging to address an increased need for dynamic task parallelism in applications for multicore processors and shared-address-space parallel computing. Examples include OpenMP 3.0, Java Concurrency Utilities, Microsoft Task Parallel Library, Intel Threading Building Blocks, Cilk, X10, Chapel, and Fortress. Scheduling algorithms based on work stealing, as embodied in...
Communication and Computation Overlap through Task Synchronization in Multi-Locale Chapel Environment
Parallel processing systems use data parallelism to achieve high performance data processing. Data parallelism is normally based on data arrays, which are distributed to separate nodes. Therefore, efficient communication between nodes is required to initialize the distribution. In this paper, we propose a computation and communication overlapping technique to reduce the overhead of communicatio...
Journal:
Volume/Issue:
Pages: -
Publication date: 2009